-
The development of transformer-based models has led to significant advances on a wide range of vision and NLP research problems. However, this progress has not carried over effectively to biosensor/physiological-signal-based emotion recognition, because transformers require large amounts of training data and most biosensor datasets are too small to train such models. To address this issue, we propose a novel Unified Biosensor-Vision Multimodal Transformer (UBVMT) architecture, which enables self-supervised pretraining by extracting remote photoplethysmography (rPPG) signals from videos in the large CMU-MOSEI dataset. UBVMT classifies emotions in the arousal-valence space by combining a 2D representation of ECG/PPG signals with facial information. In contrast to modality-specific architectures, the unified UBVMT architecture consists of homogeneous transformer blocks that take as input an image-based representation of the biosensor signals together with the corresponding face information for emotion representation. This minimal modality-specific design halves the number of parameters relative to conventional multimodal transformer networks, enabling its use in our web-based system, where loading large models poses significant memory challenges. UBVMT is pretrained in a self-supervised manner by employing masked autoencoding to reconstruct masked patches of video frames and of 2D scalogram images of ECG/PPG signals, and contrastive modeling to align face and ECG/PPG data. Extensive experiments on publicly available datasets show that our UBVMT-based model produces results comparable to state-of-the-art techniques.
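As a rough illustration of the scalogram input described above, the sketch below converts a 1-D PPG window into a 2-D scalogram image with a continuous wavelet transform. This is a minimal sketch, not the authors' code: the sampling rate, wavelet choice, scale range, and normalization are our assumptions.

```python
# Minimal sketch: 1-D PPG window -> 2-D scalogram image via continuous
# wavelet transform. Sampling rate, wavelet, and scales are assumptions.
import numpy as np
import pywt

def ppg_to_scalogram(ppg: np.ndarray, fs: int = 64, scales=None) -> np.ndarray:
    """Return |CWT coefficients| of a PPG window as a normalized 2-D image."""
    if scales is None:
        scales = np.arange(1, 129)  # 128 scale rows (assumed)
    coeffs, _ = pywt.cwt(ppg, scales, "morl", sampling_period=1.0 / fs)
    scalogram = np.abs(coeffs)  # shape: (len(scales), len(ppg))
    # Normalize to [0, 1] so it can be patchified like an image.
    return (scalogram - scalogram.min()) / (np.ptp(scalogram) + 1e-8)

# Example: 10 s of synthetic PPG at 64 Hz (a ~1.2 Hz pulse plus noise).
t = np.arange(0, 10, 1 / 64)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)
print(ppg_to_scalogram(ppg).shape)  # (128, 640)
```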
-
Social media platforms and online gaming sites play a pervasive role in facilitating peer interaction and social development for adolescents, but they also pose potential threats to health and safety. Tackling cyberbullying on these platforms is crucial to ensuring the healthy social development of adolescents. Cyberbullying has been linked to adverse mental health outcomes among adolescents, including anxiety, depression, academic underperformance, and an increased risk of suicide. While cyberbullying is a concern for all adolescents, those with disabilities are particularly susceptible and face a higher risk of being targeted. Our research addresses these challenges by introducing a personalized online virtual companion guided by artificial intelligence (AI). The web-based virtual companion's interactions aim to help adolescents detect cyberbullying. More specifically, an adolescent with autism spectrum disorder (ASD) watches a cyberbullying scenario in a virtual environment, and the AI virtual companion then asks the adolescent whether he/she detected cyberbullying. To let the virtual companion know in real time whether the adolescent has learned to detect cyberbullying, we implemented fast and lightweight cyberbullying detection models based on the T5-small and MobileBERT networks. Our experimental results show that we obtain results comparable to state-of-the-art methods despite the compact architecture.
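For illustration, here is a minimal sketch of a MobileBERT-based text classifier of the kind described above, using the Hugging Face transformers API. The base checkpoint google/mobilebert-uncased is public, but the fine-tuned cyberbullying weights and the label mapping shown here are assumptions, not the authors' released model.

```python
# Minimal sketch: lightweight cyberbullying text classifier on MobileBERT.
# Fine-tuned weights and label order are assumed, not the paper's model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "google/mobilebert-uncased"  # base weights; fine-tuning assumed
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.eval()

def is_cyberbullying(text: str) -> bool:
    inputs = tokenizer(text, truncation=True, max_length=128, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, 2)
    return bool(logits.argmax(dim=-1).item())  # label 1 = bullying (assumed)
```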
-
The options for artificial intelligence (AI) tools used in teacher education are increasing daily, but more is not always better for teachers working in already complex classroom settings. This team discusses the increase of AI in schools and provides an example, from administrators, teacher educators, and computer scientists, of an AI virtual agent and the research supporting student learning and teachers in classroom settings. The authors discuss the creation and potential of virtual characters in elementary classrooms, combined with biometrics and facial emotion recognition, which in this study impacted student learning and offered support to the teacher. The researchers share the development of the AI agent, the lessons learned, the integration of biometrics and facial tracking, and how teachers use this emerging form of AI both in classroom-based center activities and to support students' emotional regulation. The authors conclude by describing the application of this type of support in teacher preparation programs and a vision of the future of using AI agents in instruction.
-
Recognizing the affective state of children with autism spectrum disorder (ASD) in real-world settings poses challenges due to varying head poses, illumination levels, occlusion, and the lack of emotion-annotated datasets for in-the-wild scenarios. Understanding the emotional state of children with ASD is crucial for providing personalized interventions and support. Existing methods often rely on controlled lab environments, limiting their applicability to real-world scenarios. Hence, a framework is needed that enables the recognition of affective states of children with ASD in uncontrolled settings. This paper presents such a framework, based on heart rate (HR) information. More specifically, an algorithm is developed that classifies a participant's emotion as positive, negative, or neutral by analyzing the heart rate signal acquired from a smartwatch. The heart rate data are obtained in real time using a smartwatch application while the child learns to code a robot and interacts with an avatar; the avatar assists the child in developing communication skills and programming the robot. We also present a semi-automated annotation technique for the heart rate data based on facial expression recognition. The HR signal is analyzed to extract features that capture the emotional state of the child. Additionally, the performance of a raw-HR-signal-based emotion classification algorithm is compared with a classification approach based on features extracted from HR signals using the discrete wavelet transform (DWT). The experimental results demonstrate that the proposed method achieves performance comparable to state-of-the-art HR-based emotion recognition techniques, despite operating in an uncontrolled setting rather than a controlled lab environment. The framework contributes to real-world affect analysis of children with ASD using HR information; by enabling emotion recognition in uncontrolled settings, it has the potential to improve the monitoring and understanding of the emotional well-being of children with ASD in their daily lives.
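A minimal sketch of the DWT feature path described above: each heart-rate window is decomposed with a discrete wavelet transform, and simple per-sub-band statistics are fed to a standard classifier. The wavelet, decomposition level, feature statistics, and the random-forest classifier are our assumptions, not the paper's exact configuration.

```python
# Minimal sketch: DWT sub-band statistics of an HR window as features for a
# 3-class (negative/neutral/positive) classifier. Choices are assumptions.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier  # classifier choice assumed

def dwt_features(hr: np.ndarray, wavelet: str = "db4", level: int = 3) -> np.ndarray:
    coeffs = pywt.wavedec(hr, wavelet, level=level)  # [cA3, cD3, cD2, cD1]
    feats = []
    for c in coeffs:
        # Mean, spread, peak, and energy of each sub-band.
        feats += [c.mean(), c.std(), np.abs(c).max(), np.sum(c ** 2)]
    return np.asarray(feats)

def train(windows: np.ndarray, labels: np.ndarray) -> RandomForestClassifier:
    """windows: (n_samples, window_len) HR windows; labels: emotion per window."""
    X = np.vstack([dwt_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf
```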
-
Many studies have demonstrated the usefulness of virtual characters in educational settings; however, widespread adoption of such tools is limited by development costs and accessibility. This article describes a novel platform, the Web Automated Virtual Environment (WAVE), that delivers virtual experiences through the web. The system integrates data acquired from a variety of sources so that the virtual characters can exhibit behaviors appropriate to the designer's goals, such as supporting users based on an understanding of their activities and emotional states. The WAVE platform overcomes the scalability challenge of the human-in-the-loop model by employing a web-based system and triggering automated character behavior. We therefore plan to make WAVE freely accessible (as part of the Open Education Resources) and available anytime, anywhere.
-
The authors present the design and implementation of an exploratory virtual learning environment that assists children with autism spectrum disorder (ASD) in learning science, technology, engineering, and mathematics (STEM) skills while improving their social-emotional and communication skills. The primary contribution of this exploratory research is how educational research informs technological advances in triggering a virtual AI companion (AIC) for children in need of social-emotional and communication skills development. The AIC adapts to students' varying levels of needed support. This project began by using puppetry control (human-in-the-loop) of the AIC, assisting students with ASD in learning basic coding, practicing their social skills with the AIC, and attaining emotion recognition and regulation skills for effective communication and learning. The student is given the challenge of programming a robot, Dash™, to move in a square. Based on observed behaviors, the puppeteer controls the virtual agent's actions to support the student in coding the robot. The virtual agent's actions that inform the development of the AIC include speech, facial expressions, gestures, respiration, and heart color changes coded to indicate emotional state. The paper provides exploratory findings from the first two years of this five-year scaling-up research study. The outcomes discussed align with a common research design for students with disabilities: single case study research. This type of design does not involve randomized control trials; instead, each student acts as his or her own control subject. Students with ASD have substantial individual differences in their social skill deficits, behaviors, communication, and learning needs, which vary greatly from the norm and from other individuals identified with this disability. Therefore, findings are reported as changes within subjects rather than across subjects. While these exploratory observations serve as a basis for longer-term research on a larger population, this paper focuses less on student learning and more on the evolving technology of the AIC and on supporting students with ASD in STEM environments.
-
Purpose: The purpose of this paper is to provide a detailed accounting of the energy and materials consumed during magnetic resonance imaging (MRI). Design/methodology/approach: The first and second stages of the ISO standards (ISO 14040:2006 and ISO 14044:2006) were followed to develop a life cycle inventory (LCI). The LCI data collection took the form of observations, time studies, real-time metered power consumption, review of imaging department scheduling records, and review of technical manuals and literature. Findings: The carbon footprint of the entire MRI service on a per-patient basis was measured at 22.4 kg CO₂ eq. The in-hospital energy use (process energy) for performing MRI is 29 kWh per patient for the MRI machine, ancillary devices, and light fixtures, while the out-of-hospital energy consumption, measured at 75 kWh per patient, is approximately 2.6 times the process energy; it covers fuel for generation and transmission of electricity for the hospital, plus energy to manufacture disposable, consumable, and reusable products. The actual MRI and standby energy that produces the MRI images is only about 38 percent of the total life cycle energy. Research limitations/implications: The focus on methods and proof of concept meant that only one facility and one type of imaging device technology were used to reach the conclusions. Based on similar studies of other imaging devices, the transparent data provided here can be generalized to other healthcare facilities with a few adjustments to utilization ratios, the mix of exam types, and the standby power of the facilities' imaging devices. Practical implications: The transparent, detailed life cycle approach allows the data from this study to be used by healthcare administrators to explore the hidden public health impact of the radiology department and to set goals for carbon footprint reductions in healthcare organizations by focusing on alternative imaging modalities. Moreover, the presented approach to quantifying the environmental impact of healthcare services can be replicated to provide measurable data on departmental quality improvement initiatives and to be used in hospitals' quality management systems. Originality/value: No other research has been published on the life cycle assessment of MRI. The share of the indirect, outside-hospital environmental impact of MRI services is a previously undocumented consequence of the physician's order for an internal image.
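A quick sanity check of the per-patient energy figures reported above; the input numbers come from the abstract, and only the ratio arithmetic is ours.

```python
# Per-patient energy bookkeeping from the abstract (labels paraphrased).
process_kwh = 29.0   # in-hospital: MRI machine, ancillary devices, lighting
upstream_kwh = 75.0  # out-of-hospital: electricity generation/transmission, product manufacture
footprint_kg = 22.4  # reported carbon footprint per patient, kg CO2-eq

total_kwh = process_kwh + upstream_kwh
print(f"upstream vs. process: {upstream_kwh / process_kwh:.1f}x")  # ~2.6x
print(f"process share of total: {process_kwh / total_kwh:.0%}")    # ~28%
```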